10 research outputs found

    Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem

    In this paper, we develop a Bayesian evidence maximization framework to solve the sparse non-negative least squares (S-NNLS) problem. We introduce a family of probability densities, referred to as the Rectified Gaussian Scale Mixture (R-GSM), to model the sparsity-enforcing prior distribution for the solution. The R-GSM prior encompasses a variety of heavy-tailed densities, such as the rectified Laplacian and rectified Student-t distributions, with a proper choice of the mixing density. We utilize the hierarchical representation induced by the R-GSM prior and develop an evidence maximization framework based on the Expectation-Maximization (EM) algorithm. Using the EM-based method, we estimate the hyper-parameters and obtain a point estimate for the solution. We refer to the proposed method as rectified sparse Bayesian learning (R-SBL). We provide four R-SBL variants that offer a range of options for computational complexity and the quality of the E-step computation: Markov chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate message passing, and a diagonal approximation. Using numerical experiments, we show that the proposed R-SBL method outperforms existing S-NNLS solvers in terms of both signal and support recovery performance, and is also very robust against the structure of the design matrix.
    Comment: Under review by IEEE Transactions on Signal Processing
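
    As a rough illustration of the diagonal-approximation variant described above, the sketch below runs an EM-style loop in which the unconstrained Gaussian posterior is rectified coordinate-wise using standard truncated-normal moments. The function name, the fixed noise variance, and the iteration count are illustrative assumptions; the paper's exact E-step and hyper-parameter updates may differ.

```python
# Minimal sketch of an R-SBL-style EM loop with a per-coordinate (diagonal)
# rectified-Gaussian posterior approximation. Hypothetical illustration only;
# not the paper's exact algorithm.
import numpy as np
from scipy.stats import norm

def rsbl_diagonal(Phi, y, sigma2=1e-4, n_iter=100, gamma_floor=1e-10):
    M, N = Phi.shape
    gamma = np.ones(N)                                  # prior variances (hyper-parameters)
    for _ in range(n_iter):
        # Unconstrained Gaussian posterior of x given the current gamma.
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ Phi.T @ y / sigma2
        # Diagonal approximation: rectify each marginal N(mu_i, Sigma_ii)
        # to [0, inf) and use closed-form truncated-normal moments.
        s = np.sqrt(np.diag(Sigma))
        alpha = -mu / s
        lam = norm.pdf(alpha) / np.clip(norm.sf(alpha), 1e-12, None)
        first = mu + s * lam                            # E[x_i | x_i >= 0]
        var = s**2 * (1.0 + alpha * lam - lam**2)
        second = var + first**2                         # E[x_i^2 | x_i >= 0]
        # M-step of evidence maximization: gamma_i <- E[x_i^2].
        gamma = np.maximum(second, gamma_floor)
    return first                                        # rectified posterior mean

# Tiny synthetic sparse non-negative recovery example.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.uniform(1, 3, 5)
y = Phi @ x_true + 0.01 * rng.standard_normal(40)
x_hat = rsbl_diagonal(Phi, y)
```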

    A Unified Framework for Sparse Non-Negative Least Squares using Multiplicative Updates and the Non-Negative Matrix Factorization Problem

    We study the sparse non-negative least squares (S-NNLS) problem. S-NNLS occurs naturally in a wide variety of applications where an unknown, non-negative quantity must be recovered from linear measurements. We present a unified framework for S-NNLS based on a rectified power exponential scale mixture prior on the sparse codes. We show that the proposed framework encompasses a large class of S-NNLS algorithms and provide a computationally efficient inference procedure based on multiplicative update rules. Such update rules are convenient for solving large sets of S-NNLS problems simultaneously, which is required in contexts like sparse non-negative matrix factorization (S-NMF). We provide theoretical justification for the proposed approach by showing that the local minima of the objective function being optimized are sparse and that the S-NNLS algorithms presented are guaranteed to converge to a set of stationary points of the objective function. We then extend our framework to S-NMF, showing that it leads to many well-known S-NMF algorithms under specific choices of prior and providing a guarantee that a popular subclass of the proposed algorithms converges to a set of stationary points of the objective function. Finally, we study the performance of the proposed approaches on synthetic and real-world data.
    Comment: To appear in Signal Processing
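
    For concreteness, the sketch below shows multiplicative updates of the kind described above for an L1-penalized non-negative least-squares objective, plus the corresponding alternating updates for S-NMF. These are generic Lee-Seung-style rules and only one special case of the prior-driven family in the paper; the function names and penalty weight are illustrative assumptions, and the updates assume an entrywise non-negative dictionary and data.

```python
# Illustrative multiplicative updates for S-NNLS and S-NMF. Assumes Phi, y,
# and V are entrywise non-negative (the NMF setting); a general design matrix
# requires splitting Phi^T Phi into positive and negative parts.
import numpy as np

def snnls_multiplicative(Phi, y, lam=0.1, n_iter=500, eps=1e-12):
    """Minimize ||y - Phi x||^2 + lam * sum(x) subject to x >= 0."""
    x = np.ones(Phi.shape[1])               # strictly positive initialization
    PhiTy = Phi.T @ y
    PhiTPhi = Phi.T @ Phi
    for _ in range(n_iter):
        # Multiplicative rule: preserves non-negativity, no step size needed.
        x *= PhiTy / (PhiTPhi @ x + lam + eps)
    return x

def snmf(V, r, lam=0.1, n_iter=200, eps=1e-12, seed=0):
    """Sparse NMF V ~= W H with an L1 penalty on H, via alternating updates."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.1, 1.0, (V.shape[0], r))
    H = rng.uniform(0.1, 1.0, (r, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # sparse codes
        W *= (V @ H.T) / (W @ H @ H.T + eps)         # dictionary
    return W, H
```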

    Rectified Sparse Bayesian Learning and Effects and Limitations of Nuisance Regression in Functional MRI

    This dissertation considers the problems of sparse signal recovery (SSR) and nuisance regression in functional MRI (fMRI). The first part of the dissertation introduces a Bayesian framework to recover sparse non-negative solutions in under-determined systems of linear equations. A novel class of probability density functions, named Rectified Gaussian Scale Mixtures (R-GSM), is proposed to model the sparse non-negative solution of interest. A Bayesian inference algorithm called Rectified Sparse Bayesian Learning (R-SBL) is developed, which robustly recovers the solution in numerous experimental settings and outperforms state-of-the-art SSR approaches by a large margin.

    The rest of the dissertation investigates the effects of nuisance regression in fMRI. Chapter 3 proposes a mathematical framework to investigate the effects of global signal regression (GSR). GSR is a widely used nuisance-removal approach in resting-state fMRI; however, its use has been controversial because it introduces artifactual anti-correlations between pairs of fMRI signals. The proposed framework shows that the main effects of GSR can be well approximated as a temporal down-weighting or temporal censoring process, in which data from time points with relatively large GS magnitudes are greatly attenuated (or censored) while data from time points with relatively small GS magnitudes are largely retained. The censoring approximation reveals that anti-correlated networks are intrinsic to the brain's functional organization and are not simply an artifact of GSR.

    In Chapters 4 and 5, the effects of nuisance terms on the relationship between pairs of fMRI signals, both before and after nuisance regression, are investigated. It is shown that geometric norms of various nuisance regressors can significantly influence correlation-based functional connectivity (FC) estimates in both static-FC and dynamic-FC studies. It is demonstrated that nuisance regression is largely ineffective in removing the significant correlations observed between FC estimates and nuisance norms. Consequently, a mathematical bound is derived on the difference between correlation coefficients before and after nuisance regression; this bound limits the extent to which nuisance norm effects can be removed from FC estimates.
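
    To make the GSR-versus-censoring comparison concrete, the sketch below regresses the global signal out of a toy data set and, separately, censors the time points with the largest global-signal magnitudes. The threshold, the synthetic data, and the function names are illustrative assumptions rather than the dissertation's exact procedure.

```python
# Illustrative comparison of global signal regression (GSR) with a simple
# temporal-censoring approximation, in the spirit of Chapter 3. Hypothetical
# sketch only; the dissertation's weighting scheme may differ.
import numpy as np

def gsr(X):
    """Regress the global signal (mean over voxels) out of each time series.
    X: (T, V) array of T time points by V voxels."""
    g = X.mean(axis=1, keepdims=True)        # global signal, shape (T, 1)
    beta = (g.T @ X) / (g.T @ g)             # per-voxel least-squares fit
    return X - g @ beta

def censor_by_gs(X, keep_fraction=0.7):
    """Keep only time points whose global-signal magnitude is small."""
    g = X.mean(axis=1)
    keep = np.abs(g) <= np.quantile(np.abs(g), keep_fraction)
    return X[keep]

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Toy voxel-pair correlation before GSR, after GSR, and after censoring.
rng = np.random.default_rng(1)
T, V = 300, 50
X = rng.standard_normal((T, V)) + 0.5 * rng.standard_normal((T, 1))  # shared drive
Xg, Xc = gsr(X), censor_by_gs(X)
print(f"raw {corr(X[:, 0], X[:, 1]):.2f}, "
      f"after GSR {corr(Xg[:, 0], Xg[:, 1]):.2f}, "
      f"after censoring {corr(Xc[:, 0], Xc[:, 1]):.2f}")
```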
